
Combining semantic and syntactic structure for language modeling


Abstract

Structured language models for speech recognition have been shown to remedy the weaknesses of n-gram models. All current structured language models are, however, limited in that they do not take into account dependencies between non-headwords. We show that non-headword dependencies contribute to significantly improved word error rate, and that a data-oriented parsing model trained on semantically and syntactically annotated data can exploit these dependencies. This paper also contains the first DOP model trained by means of a maximum likelihood reestimation procedure, which solves some of the theoretical shortcomings of previous DOP models.
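As background to the abstract's mention of a data-oriented parsing (DOP) model, the sketch below illustrates how a DOP-style model scores a parse: derivations are assembled from treebank fragments, each fragment is weighted by its relative frequency among fragments with the same root label, and the parse probability is the sum over its derivations. This is a minimal illustration of the classic relative-frequency estimator, not the maximum likelihood reestimation procedure introduced in the paper; the toy fragments and counts are invented for the example.

```python
from collections import Counter

# Toy fragment corpus: each fragment is identified by (root_label, fragment_id).
# In a real DOP model these are subtrees extracted from an annotated treebank.
fragment_counts = Counter({
    ("S", "S -> NP VP"): 3,
    ("S", "S -> NP[she] VP"): 1,
    ("NP", "NP -> she"): 4,
    ("NP", "NP -> the dog"): 2,
    ("VP", "VP -> saw NP"): 2,
    ("VP", "VP -> VP PP"): 1,
})

# Relative-frequency estimate: P(f) = count(f) / total count of fragments
# sharing the same root label.
root_totals = Counter()
for (root, _), count in fragment_counts.items():
    root_totals[root] += count

def fragment_prob(root, frag):
    return fragment_counts[(root, frag)] / root_totals[root]

def derivation_prob(fragments):
    """Probability of one derivation: the product of its fragment probabilities."""
    p = 1.0
    for root, frag in fragments:
        p *= fragment_prob(root, frag)
    return p

# A parse tree may be assembled from fragments in several ways (derivations);
# its probability is the sum over all of them.
derivations = [
    [("S", "S -> NP VP"), ("NP", "NP -> she"),
     ("VP", "VP -> saw NP"), ("NP", "NP -> the dog")],
    [("S", "S -> NP[she] VP"),
     ("VP", "VP -> saw NP"), ("NP", "NP -> the dog")],
]
parse_prob = sum(derivation_prob(d) for d in derivations)
print(f"P(parse) = {parse_prob:.4f}")
```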

Bibliographic information

  • Author

    Bod, Rens;

  • Affiliation
  • Year 2001
  • Pages
  • Format PDF
  • Language English
  • CLC classification

